Ego-motion and Surrounding Vehicle State Estimation Using a Monocular Camera
Understanding ego-motion and surrounding vehicle state is essential to enable
automated driving and advanced driving assistance technologies. Typical
approaches to solve this problem use fusion of multiple sensors such as LiDAR,
camera, and radar to recognize surrounding vehicle state, including position,
velocity, and orientation. Such multi-sensor setups are overly complex and
costly for mass-produced personal vehicles. In this paper, we propose a
novel machine learning method to estimate ego-motion and surrounding vehicle
state using a single monocular camera. Our approach is based on a combination
of three deep neural networks to estimate the 3D vehicle bounding box, depth,
and optical flow from a sequence of images. The main contribution of this paper
is a new framework and algorithm that integrates these three networks in order
to estimate the ego-motion and surrounding vehicle state. To realize more
accurate 3D position estimation, we address ground plane correction in
real-time. The efficacy of the proposed method is demonstrated through
experimental evaluations that compare our results to ground truth data
available from other sensors, including the CAN bus and LiDAR.
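As an illustrative aside: the paper combines three networks (3D bounding box, depth, optical flow), and one geometric step it relies on is lifting a detected vehicle's image point to a 3D position and correcting it against the ground plane. The sketch below shows only that step, with hypothetical intrinsics and plane parameters; it is not the paper's actual pipeline.

```python
import numpy as np

# Assumed pinhole intrinsics (fx, fy, cx, cy) -- illustrative values only.
K = np.array([[721.5,   0.0, 609.6],
              [  0.0, 721.5, 172.9],
              [  0.0,   0.0,   1.0]])

def backproject(u, v, depth, K):
    """Lift pixel (u, v) with metric depth to a 3D point in the camera frame."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return ray * depth  # ray has z = 1, so the result sits at the given depth

def snap_to_ground(point, normal, d):
    """Project a 3D point onto the plane n.x + d = 0 (ground-plane correction)."""
    normal = normal / np.linalg.norm(normal)
    offset = point @ normal + d
    return point - offset * normal

# Bottom-center of a detected vehicle box, with depth from a depth network.
p3d = backproject(650.0, 200.0, 15.0, K)
# Assumed ground plane 1.65 m below the camera (camera y points down):
# n = [0, 1, 0], d = -1.65, i.e. the plane y = 1.65.
p_ground = snap_to_ground(p3d, np.array([0.0, 1.0, 0.0]), -1.65)
```

Snapping the back-projected point onto the plane absorbs depth-estimation noise in the vertical direction, which is one plausible reading of the ground-plane correction the abstract mentions.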
Recognition and 3D Localization of Pedestrian Actions from Monocular Video
Understanding and predicting pedestrian behavior is an important and
challenging area of research for realizing safe and effective navigation
strategies in automated and advanced driver assistance technologies in urban
scenes. This paper focuses on monocular pedestrian action recognition and 3D
localization from an egocentric view for the purpose of predicting intention
and forecasting future trajectory. A challenge in urban traffic scenes
stems from the unpredictable behavior of pedestrians, whose actions and
intentions are constantly in flux and depend on the pedestrian's pose, their
3D spatial relations, and their interaction with
other agents as well as with the environment. To partially address these
challenges, we consider the importance of pose toward recognition and 3D
localization of pedestrian actions. In particular, we propose an action
recognition framework using a two-stream temporal relation network with inputs
corresponding to the raw RGB image sequence of the tracked pedestrian as well
as the pedestrian pose. The proposed method outperforms methods using a
single-stream temporal relation network based on evaluations using the JAAD
public dataset. The estimated pose and associated body key-points are also used
as input to a network that estimates the 3D location of the pedestrian using a
unique loss function. Evaluation of our 3D localization method on the KITTI
dataset shows an improved average localization error compared to existing
state-of-the-art methods. Finally, we conduct qualitative tests of
action recognition and 3D localization on HRI's H3D driving dataset.
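For orientation: the two-stream design described above fuses an RGB stream and a pose stream. The sketch below illustrates only the late-fusion idea with stub per-stream classifiers; the stream networks, class count, and fusion weights are hypothetical, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def rgb_stream(frames):
    # Placeholder for a temporal relation network over raw RGB crops
    # of the tracked pedestrian; returns logits over 4 hypothetical actions.
    return rng.normal(size=4)

def pose_stream(keypoints):
    # Placeholder for a temporal relation network over 2D body key-points.
    return rng.normal(size=4)

def two_stream_predict(frames, keypoints, w_rgb=0.5, w_pose=0.5):
    """Weighted average of the two streams' class probabilities (late fusion)."""
    p = w_rgb * softmax(rgb_stream(frames)) + w_pose * softmax(pose_stream(keypoints))
    return p / p.sum()

probs = two_stream_predict(frames=None, keypoints=None)
```

The point of the second (pose) stream is that body configuration carries action cues that raw pixels alone surface less reliably, which is consistent with the abstract's reported gain over a single-stream baseline.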
Egocentric Vision-based Future Vehicle Localization for Intelligent Driving Assistance Systems
Predicting the future location of vehicles is essential for safety-critical
applications such as advanced driver assistance systems (ADAS) and autonomous
driving. This paper introduces a novel approach to simultaneously predict both
the location and scale of target vehicles in the first-person (egocentric) view
of an ego-vehicle. We present a multi-stream recurrent neural network (RNN)
encoder-decoder model that separately captures both object location and scale
and pixel-level observations for future vehicle localization. We show that
incorporating dense optical flow improves prediction results significantly
since it captures information about motion as well as appearance change. We
also find that explicitly modeling future motion of the ego-vehicle improves
the prediction accuracy, which could be especially beneficial in intelligent
and automated vehicles that have motion planning capability. To evaluate the
performance of our approach, we present a new dataset of first-person videos
collected from a variety of scenarios at road intersections, which are
particularly challenging moments for prediction because vehicle trajectories
are diverse and dynamic. Comment: To appear at ICRA 201
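To make the multi-stream encoder-decoder idea concrete, the sketch below encodes past box location/scale and a pooled optical-flow cue with separate RNN encoders, fuses their final states, and decodes future boxes. The plain tanh RNN cells, dimensions, and random weights are hypothetical simplifications, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
H = 8  # hidden size per stream (arbitrary for illustration)

def rnn_encode(seq, W_in, W_h):
    """Plain tanh RNN; returns the final hidden state as a sequence summary."""
    h = np.zeros(W_h.shape[0])
    for x in seq:
        h = np.tanh(W_in @ x + W_h @ h)
    return h

T_past, T_future = 10, 5
boxes = rng.normal(size=(T_past, 4))   # past (cx, cy, w, h) observations
flow  = rng.normal(size=(T_past, 16))  # pooled optical-flow features

W_box_in,  W_box_h  = rng.normal(size=(H, 4))  * 0.1, rng.normal(size=(H, H)) * 0.1
W_flow_in, W_flow_h = rng.normal(size=(H, 16)) * 0.1, rng.normal(size=(H, H)) * 0.1

# Fuse the two streams by concatenating their final encoder states.
h = np.concatenate([rnn_encode(boxes, W_box_in, W_box_h),
                    rnn_encode(flow,  W_flow_in, W_flow_h)])

# Decoder: roll the fused state forward, emitting one future box per step.
W_dec_h = rng.normal(size=(2 * H, 2 * H)) * 0.1
W_out   = rng.normal(size=(4, 2 * H))     * 0.1
future = []
for _ in range(T_future):
    h = np.tanh(W_dec_h @ h)
    future.append(W_out @ h)
future = np.stack(future)  # (T_future, 4): predicted locations and scales
```

Predicting scale alongside location is what lets an egocentric model express approaching/receding motion without explicit depth, which matches the abstract's framing.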
IL-4 induces the formation of multinucleated giant cells and expression of α5 integrin in central giant cell lesion
It is now well established that IL-4 has a central role in the development of monocytes into multinucleated giant cells (MGCs) by inducing the expression of integrins on the monocyte surface. The aim of this study was to investigate the potential role of IL-4 in inducing α5 integrin expression in peripheral blood samples of patients with giant cell granuloma. Monocytes were isolated from peripheral blood samples of patients with central giant cell granuloma (CGCG) and healthy controls using the human Monocyte Isolation Kit II. Isolated monocytes were then cultured in the absence or presence of IL-4 (10 and 20 ng/mL), and following RNA extraction and cDNA synthesis, real-time PCR was performed to determine the level of α5 integrin expression. MGC formation and morphological analyses were assessed under light microscopy. For confirmation of MGCs, immunocytochemistry was also carried out using an anti-RANK (receptor activator of NF-κB) antibody. In both patient and control groups, α5 levels were significantly enhanced by increasing the IL-4 dose from 10 to 20 ng/mL. In addition, the differences between patient and control groups were significant even without IL-4 treatment. Moreover, the number of cells expressing RANK, and therefore the number of giant cells, was significantly higher in the patient group than in controls, as assessed by immunocytochemistry. In this study, we showed an elevation in α5 integrin expression levels upon stimulation with IL-4, strongly indicating that this integrin acts as an important mediator of macrophage-to-macrophage fusion and giant cell development.
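A side note on the quantification step: relative expression from real-time PCR, as used above, is commonly reported with the 2^(-ΔΔCt) method. The sketch below shows that arithmetic; the Ct values and the choice of GAPDH as reference gene are invented for illustration and are not from the study.

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative quantification by 2^-(delta-delta Ct).

    Each sample's target Ct is normalized against a reference gene,
    then the treated sample is compared with the untreated control.
    """
    d_ct_treated = ct_target - ct_ref            # normalize treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # normalize control sample
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values: IL-4-treated monocytes vs untreated, GAPDH reference.
fc = fold_change(ct_target=24.0, ct_ref=18.0, ct_target_ctrl=26.0, ct_ref_ctrl=18.0)
# delta-delta Ct = (24 - 18) - (26 - 18) = -2, so fold change = 2^2 = 4.0
```

Lower Ct means earlier detection and hence more transcript, which is why a negative ΔΔCt corresponds to up-regulation, as with the IL-4-enhanced α5 levels reported above.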